Lindsey Raymond

My research studies the impacts of digitization, machine learning, and artificial intelligence on workers, markets, and organizations. From July 2021 to July 2022, I was on leave as a Staff Economist at the Council of Economic Advisers. 

Starting in Summer 2024, I will be a postdoctoral fellow at Microsoft Research. I will join Harvard Business School as an Assistant Professor in Fall 2025.


Google Scholar

Email: lraymond@mit.edu

Working Papers

The Market Effects of Algorithms (job market paper)

While there is excitement about the potential of algorithms to optimize individual decision-making, changing individual behavior will, almost inevitably, affect markets. Yet little is known about these effects. In this paper, I study how the availability of algorithmic prediction changes entry, allocation, and prices in the U.S. residential real estate market, a key driver of household wealth. I identify a market-level natural experiment that generates variation in the cost of using algorithms to value houses: digitization, the transition from physical to digital housing records. I show that digitization leads to entry by investors using algorithms but does not displace investors using human judgment. Instead, human investors shift toward houses that are difficult to predict algorithmically. Algorithm-using investors predominantly purchase minority-owned homes, an area where humans may be biased. Digitization increases the average sale price of minority-owned homes by 5%, or $5,000, and nearly eliminates racial disparities in home prices. Algorithmic investors, via competition, affect the prices paid by humans, and this competitive channel drives most of the reduction in racial disparities. This decrease in racial inequality underscores the potential of algorithms to mitigate human biases at the market level.

Generative AI at Work, with Erik Brynjolfsson and Danielle Li (arXiv link)

Revise and Resubmit, Quarterly Journal of Economics

Selected Media: Axios, Bloomberg Businessweek, Business Insider, Exponential View, Fortune, New York Times, NPR Planet Money, NPR Marketplace 

New AI tools have the potential to change the way workers perform and learn, but little is known about their impacts on the job. In this paper, we study the staggered introduction of a generative AI-based conversational assistant using data from 5,179 customer support agents. Access to the tool increases productivity, as measured by issues resolved per hour, by 14% on average, including a 35% improvement for novice and low-skilled workers but with minimal impact on experienced and highly skilled workers. We provide suggestive evidence that the AI model disseminates the best practices of more able workers and helps newer workers move down the experience curve. In addition, we find that AI assistance improves customer sentiment, increases employee retention, and may lead to worker learning. Our results suggest that access to generative AI can increase productivity, with large heterogeneity in effects across workers.

Hiring as Exploration, with Danielle Li and Peter Bergman

Revise and Resubmit, Review of Economic Studies

Selected Media: Axios, Bloomberg Businessweek, Business Insider, Fast Company, MIT News

This paper views hiring as a contextual bandit problem: to find the best workers over time, firms must balance “exploitation” (selecting from groups with proven track records) with “exploration” (selecting from under-represented groups to learn about quality). Yet modern hiring algorithms, based on “supervised learning” approaches, are designed solely for exploitation. Instead, we build a resume screening algorithm that values exploration by evaluating candidates according to their statistical upside potential. Using data from professional services recruiting within a Fortune 500 firm, we show that this approach improves the quality (as measured by eventual hiring rates) of candidates selected for an interview, while also increasing demographic diversity, relative to the firm's existing practices. The same is not true for traditional supervised-learning-based algorithms, which improve hiring rates but select far fewer Black and Hispanic applicants. In an extension, we show that exploration-based algorithms are also able to learn more effectively about simulated changes in applicant hiring potential over time. Together, our results highlight the importance of incorporating exploration when developing decision-making algorithms that are potentially both more efficient and more equitable.
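The exploration idea above can be illustrated with a standard upper-confidence-bound (UCB) scoring rule, a common way to operationalize "statistical upside potential" in bandit problems. This is a minimal sketch, not the paper's actual algorithm: the group labels, counts, and quality scores below are hypothetical, and the paper's implementation differs in its details.

```python
import math

def ucb_score(predicted_quality, n_group, n_total, c=1.0):
    """UCB-style score: predicted quality plus an exploration bonus.

    The bonus shrinks as a group accumulates more observations, so
    candidates from under-sampled groups receive extra consideration.
    """
    bonus = c * math.sqrt(math.log(n_total) / n_group)
    return predicted_quality + bonus

# Hypothetical data: past interview counts per group, and candidates
# with model-predicted hiring potential in [0, 1].
history = {"A": 400, "B": 25}          # group B is under-sampled
n_total = sum(history.values())
candidates = [("A", 0.60), ("B", 0.55)]

# Rank candidates by UCB score rather than predicted quality alone.
ranked = sorted(
    candidates,
    key=lambda cand: ucb_score(cand[1], history[cand[0]], n_total),
    reverse=True,
)
```

Under this scoring, the group-B candidate ranks first despite a lower point estimate of quality, because the firm has seen few candidates from that group and so stands to learn more from interviewing them; a purely supervised ranking would select on `predicted_quality` alone.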